22 research outputs found

    Location dependent transaction for mobile environment

    Get PDF
    With recent advances in mobile and portable devices, more than one billion cellular phones are in use worldwide, joined by other wireless handheld computing devices such as personal digital assistants (PDAs) and pocket PCs; with this number of users there are significant opportunities for mobile commerce growth. Although mobile commerce enables access to goods and services regardless of the location of either buyer or seller, in many situations the specific location of the buyer and seller is critical to the transaction [1]. The time for transaction execution also becomes increasingly important, not only from a performance point of view but also because of the relationship between data and location, especially when the mobile user changes location dynamically. In this paper we introduce a mobile transaction model that takes into consideration location-dependent transactions and the time constraint on mobile transaction execution.

    Correspondence of OOP Refactoring Techniques in PL/SQL Environment: A Case Study of Agile Project Code

    Get PDF
    Scrum is one of the most popular agile approaches for developing software, and it faces code quality challenges that affect software functionality, maintainability, reliability, usability, efficiency, and portability. Many techniques are applied to enhance code quality by analyzing and reviewing the code using code conventions, standards, and refactoring. In the Scrum development framework, handling code quality issues requires focusing on the practices that cause such issues (e.g., short testing volume). Therefore, this study investigates how refactoring affects Scrum source code quality and maintainability. The study followed an experimental approach, including a case study of a large and complex Oracle PL/SQL software program through its development phase. The source code of the chosen sprint is analyzed to evaluate its quality and identify the probability of code smells. Nine refactoring techniques are applied to the collected data to identify code smells in the chosen PL/SQL software. The findings show that choosing a suitable refactoring technique positively affects the maintainability of sprint code.

    Challenges of Cloud Computing in Jordanian Govt.: Insights from Telcos

    Get PDF
    Cloud computing offers many benefits to governments, including increased efficiency, flexibility, and cost savings. However, there are also significant challenges to adopting cloud computing services. In the case of the Jordanian government, some of these challenges include concerns about data security and privacy, lack of technical expertise, limited funding and resources, and cultural resistance to change. This paper examines the challenges faced by the Jordanian government in adopting cloud computing services and evaluates their impact on government institutions. The study collected data from three local telecommunications companies in Jordan to identify potential challenges and assess their significance through a questionnaire. The results indicated challenges that negatively affected cloud adoption, including performance, usability, and cost, as well as challenges that positively impacted adoption. Maintenance and information security challenges were rated as the most significant. The study recommends promoting awareness, offering training programs, and conducting feasibility studies to overcome these challenges and improve cloud adoption. Future research should expand the study sample and investigate additional challenges impacting government organizations’ adoption of cloud computing services.

    Synthetic generation of multidimensional data to improve classification model validity

    Get PDF
    This article compares Generative Adversarial Network (GAN) models and feature selection methods for generating synthetic data in order to improve the validity of a classification model. The synthetic data generation technique produces new data samples from existing data to increase the diversity of the data and help the model generalize better. The multidimensional aspect of the data refers to the fact that it can have multiple features or variables that describe it. GAN models have proven effective in preserving the statistical properties of the original data. However, the order of data augmentation and feature selection is crucial for building robust and accurate predictive models. By comparing the different GAN models with feature selection methods on multidimensional datasets, this article aims to determine the best combination to support the validity of a classification model on multidimensional data.
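The point about the order of augmentation and feature selection can be illustrated with a toy sketch. The variance-based selector and Gaussian resampler below are deliberately simple stand-ins (assumptions for illustration, not the article's GAN models or selection methods):

```python
import numpy as np

def select_top_k(X, k):
    """Keep the k features with the highest variance (a simple stand-in
    for a real feature-selection method)."""
    idx = np.argsort(X.var(axis=0))[::-1][:k]
    return np.sort(idx)

def augment(X, n_new, rng):
    """Naive per-feature Gaussian resampler standing in for a trained
    GAN generator: sample new rows from the empirical mean and std."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    return rng.normal(mu, sigma, size=(n_new, X.shape[1]))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

# Order A: select features first, then augment in the reduced space.
cols = select_top_k(X, 4)
X_a = np.vstack([X[:, cols], augment(X[:, cols], 50, rng)])

# Order B: augment in the full space, then select features afterwards.
X_full = np.vstack([X, augment(X, 50, rng)])
X_b = X_full[:, select_top_k(X_full, 4)]

print(X_a.shape, X_b.shape)  # both (150, 4)
```

Both orders yield a dataset of the same shape, but the set of selected features (and hence the downstream classifier) can differ, which is exactly the design choice the article evaluates.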

    An efficient machine-learning model based on data augmentation for pain intensity recognition

    No full text
    Pain is defined as “a distressing experience associated with actual or potential tissue damage with sensory, emotional, cognitive and social components”. Knowing the exact level of pain experienced is critical for caregivers to make a diagnosis and devise a suitable treatment plan, but the available methods depend entirely on patient self-report, which increases the difficulty of knowing the accurate level of pain experienced by the patient. Therefore, automating this process has become an important issue, but because medical data are hard to acquire, it is difficult to build a predictive model with good performance. A Generative Adversarial Network is a framework that generates artificial data with a distribution similar to the real data by training two networks: the generator, which tries to generate new samples similar to the real ones, and the discriminator, which applies traditional supervised classification to distinguish the augmented samples; the optimal case is when the discriminator cannot distinguish the augmented samples from the real samples. In this research, we generated data using Least Squares Generative Adversarial Networks and studied the effect of applying feature selection to the data before augmentation. Moreover, the approach was tested on a dataset that contains multiple biopotential signals for different levels of pain.
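The least-squares variant mentioned above replaces the usual cross-entropy GAN objective with squared distances to target labels. A minimal NumPy sketch of the two LSGAN losses (illustrative only, not the paper's implementation; `d_real` and `d_fake` stand for discriminator scores on real and generated batches):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Discriminator loss: push scores on real samples toward 1
    and scores on generated samples toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Generator loss: push the discriminator's scores on
    generated samples toward 1 (i.e., fool the discriminator)."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

# Near the optimum the discriminator cannot separate the two
# distributions, so its scores hover around 0.5 for both batches.
d_real = np.full(8, 0.5)
d_fake = np.full(8, 0.5)
print(lsgan_d_loss(d_real, d_fake))  # 0.25
print(lsgan_g_loss(d_fake))          # 0.125
```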

    Statistical-Based Heuristic for Scheduling of Independent Tasks in Cloud Computing

    No full text
    Cloud computing is an emerging and innovative technology used for solving large-scale complex problems. It is considered an extension of distributed and parallel computing. Additionally, it enables sharing, organizing, and aggregating computational machines to satisfy user demands. One of the main goals of task scheduling is to minimize the makespan (i.e., the overall processing time) and maximize machine utilization. This paper addresses the problem of how to schedule many independent tasks across different machines. It introduces two batch-mode heuristic algorithms for scheduling independent tasks in the computational cloud environment: the high mean absolute deviation first (HMADF) heuristic and the QoS-guided Sufferage-HMADF heuristic. The paper also presents existing batch-mode heuristics such as Min-Min, Max-Min, and Sufferage. The heuristics are simulated and the experimental results are discussed using two performance measures: makespan and machine resource utilization.
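As a reference point for the batch-mode heuristics named above, the classic Min-Min baseline can be sketched in a few lines. This is a generic textbook illustration over a hypothetical expected-time-to-compute (ETC) matrix, not the paper's code:

```python
import numpy as np

def min_min(etc):
    """Min-Min batch scheduling over etc[task, machine]: repeatedly pick
    the unassigned task whose earliest completion time is smallest and
    assign it to that machine."""
    n_tasks, n_machines = etc.shape
    ready = np.zeros(n_machines)        # machine ready times
    unassigned = set(range(n_tasks))
    assignment = {}
    while unassigned:
        best = None                     # (completion_time, task, machine)
        for t in unassigned:
            ct = ready + etc[t]         # completion time of t on each machine
            m = int(np.argmin(ct))
            if best is None or ct[m] < best[0]:
                best = (ct[m], t, m)
        ct_best, t, m = best
        assignment[t] = m
        ready[m] = ct_best
        unassigned.remove(t)
    return assignment, ready.max()      # schedule and makespan

# Three tasks, two machines; entries are expected execution times.
etc = np.array([[3.0, 5.0],
                [4.0, 1.0],
                [2.0, 6.0]])
schedule, makespan = min_min(etc)
print(schedule, makespan)               # makespan for this ETC matrix is 5.0
```

Max-Min and Sufferage differ only in the task-selection rule (largest minimum completion time, and largest gap between best and second-best machine, respectively), which is why they share this batch-mode skeleton.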

    Exploring Code Vulnerabilities through Code Reviews: An Empirical Study on OpenStack Nova

    Get PDF
    Effective code review is a critical aspect of software quality assurance, requiring a meticulous examination of code snippets to identify weaknesses and other quality issues. Unfortunately, the biggest threat to software quality is developers’ disregard for code-writing standards, which leads to code smells. Despite their importance, code smells are not always identified during code review, creating a need for an empirical study to uncover vulnerabilities in code reviews. This study aimed to explore vulnerabilities in code reviews by examining the OpenStack project, Nova. After analyzing 4873 review comments, we identified 187 comments related to possible vulnerabilities, and a pilot study confirmed 151 of them as vulnerabilities. Our findings revealed that injection vulnerability flaws were the most prevalent, while insecure deserialization was the least common. Our study also identified three primary reasons for vulnerabilities: developers’ knowledge of secure coding practices, unfamiliarity with existing code, and unintentional errors. In response to these vulnerabilities, reviewers suggested that developers fix the issues, and developers generally followed their recommendations. We recommend that developers receive training in secure coding practices to improve software quality, and that code review procedures include specific checks for common vulnerabilities. Additionally, it is essential to ensure that reviewers and developers communicate effectively to address vulnerabilities efficiently.

    An Effective Negotiation Strategy for Quantitative and Qualitative Issues in Multi-Agent Systems

    No full text
    Automated negotiation is an efficient approach for interaction in multi-agent systems in which agents exchange offers and counteroffers to conclude an agreement. This paper addresses the problem of offer formulation during the interaction between buyer and seller software agents for the purpose of reaching an agreement over quantitative and qualitative issues at once. In order to improve the outcome of the negotiation process, a hybrid negotiation method is presented and verified. Offer formulation is based on fuzzy similarity and preference-based methods. The preference-based mechanism is used for quantitative issues, while the fuzzy similarity technique is used for qualitative issues. The preference-based mechanism takes into account the preferences of the opponent when generating offers; the agent makes greater concessions on the issues which the opponent prefers more. The fuzzy-similarity method formulates an offer that is more similar to the one received from the opponent during the last round of negotiation. The experiments consist of two parts. The first part compares the hybrid strategy with the basic one. The findings reveal that the hybrid strategy is better on all performance measures, namely utility rate, agreement rate, and Nash product rate. The second part of the experimental work compares four offer-generation mechanisms: basic, preference-based, fuzzy similarity, and hybrid. The results show that the hybrid negotiation strategy performs equal to or better than the other negotiation strategies. More details can be found in the paper.
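The preference-based idea — conceding more on the issues the opponent weights more heavily — can be shown with a small sketch. The issue names, weights, concession direction, and step size here are hypothetical illustrations, not values from the paper:

```python
def concede(offer, opponent_weights, step):
    """Preference-based concession on quantitative issues: distribute the
    concession step across issues in proportion to the opponent's weights
    (assumed normalized to sum to 1), so the opponent's favorite issues
    receive the largest concessions. Lower values are assumed to favor
    the opponent here, purely for illustration."""
    return {issue: value - step * opponent_weights[issue]
            for issue, value in offer.items()}

# Hypothetical offer over two quantitative issues.
offer = {"price": 100.0, "delivery_days": 10.0}
weights = {"price": 0.8, "delivery_days": 0.2}   # opponent cares mostly about price
print(concede(offer, weights, step=5.0))
# → {'price': 96.0, 'delivery_days': 9.0}
```

The fuzzy-similarity half of the hybrid strategy would instead score candidate offers against the opponent's last offer and propose the most similar acceptable one, handling the qualitative issues the concession formula above cannot.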